Introduction

The marketing and accounts receivable managers at our company have notified us that we have significant exposure to exchange rates. Our functional currency is the U.S. dollar (USD), but we operate in the United Kingdom, the European Union, and Japan. This exchange rate exposure hits the gross revenue of our financial lines.

Our cash flow is also affected by the ebb and flow of the accounts receivable component of working capital as we produce and sell several products. When exchange rates are volatile, so are our earnings. The goal of this project is to explore the relationships between the different markets to better understand how our earnings are affected by the exchange markets. This is especially important because we have missed our earnings forecasts for five consecutive quarters.

Part 1

Importing the Data

First, we will load time series data of the exchange rates for the European, United Kingdom, Chinese, and Japanese markets. We will obtain the CSV file from the turing.manhattan.edu website and view the structure and a sample of the exchange rates file.

library(zoo)      #For creating time series objects
library(xts)      #For time series analysis
library(ggplot2)  #For creating graphics
library(plotly)   #For interactive graphics

#The URL for the exchange rate data
URL <- "https://turing.manhattan.edu/~wfoote01/finalytics/data/exrates.csv"

#Reading in the exchange rates from the URL provided by
#turing.manhattan.edu, omitting missing data and keeping the dates as characters
exrates <- na.omit(read.csv(URL, stringsAsFactors = F))
#Converting the string dates to actual dates
exrates$DATE <- as.Date(exrates$DATE, "%m/%d/%Y")

#Five columns (date, eur2usd, gbp2usd, cny2usd, jpy2usd)
#the data is daily exchange rates
head(exrates)     #Looking at the data
##         DATE USD.EUR USD.GBP USD.CNY USD.JPY
## 1 2013-01-28  1.3459  1.5686  6.2240   90.73
## 2 2013-01-29  1.3484  1.5751  6.2259   90.65
## 3 2013-01-30  1.3564  1.5793  6.2204   91.05
## 4 2013-01-31  1.3584  1.5856  6.2186   91.28
## 5 2013-02-01  1.3692  1.5744  6.2265   92.54
## 6 2013-02-04  1.3527  1.5737  6.2326   92.57
tail(exrates)     #Looking at the end of the data
##            DATE USD.EUR USD.GBP USD.CNY USD.JPY
## 1248 2018-01-19  1.2238  1.3857  6.3990  110.56
## 1249 2018-01-22  1.2230  1.3944  6.4035  111.15
## 1250 2018-01-23  1.2277  1.3968  6.4000  110.46
## 1251 2018-01-24  1.2390  1.4198  6.3650  109.15
## 1252 2018-01-25  1.2488  1.4264  6.3189  108.70
## 1253 2018-01-26  1.2422  1.4179  6.3199  108.38
str(exrates)      #Viewing the structure of the data
## 'data.frame':    1253 obs. of  5 variables:
##  $ DATE   : Date, format: "2013-01-28" "2013-01-29" ...
##  $ USD.EUR: num  1.35 1.35 1.36 1.36 1.37 ...
##  $ USD.GBP: num  1.57 1.58 1.58 1.59 1.57 ...
##  $ USD.CNY: num  6.22 6.23 6.22 6.22 6.23 ...
##  $ USD.JPY: num  90.7 90.7 91 91.3 92.5 ...
#1253 different instances of exchange rates
summary(exrates)  #From 28 Jan 2013 to 26 Jan 2018
##       DATE               USD.EUR         USD.GBP         USD.CNY     
##  Min.   :2013-01-28   Min.   :1.038   Min.   :1.212   Min.   :6.040  
##  1st Qu.:2014-04-25   1st Qu.:1.107   1st Qu.:1.324   1st Qu.:6.178  
##  Median :2015-07-27   Median :1.158   Median :1.514   Median :6.261  
##  Mean   :2015-07-26   Mean   :1.199   Mean   :1.474   Mean   :6.401  
##  3rd Qu.:2016-10-24   3rd Qu.:1.314   3rd Qu.:1.573   3rd Qu.:6.627  
##  Max.   :2018-01-26   Max.   :1.393   Max.   :1.716   Max.   :6.958  
##     USD.JPY      
##  Min.   : 90.65  
##  1st Qu.:102.14  
##  Median :109.88  
##  Mean   :109.33  
##  3rd Qu.:116.76  
##  Max.   :125.58
# USD to CNY appears to be the most steady

Question 1: Nature of Exchange Rates

Exchange rates are simply the ratio at which one nation’s currency can be exchanged for another’s, a measure of relative purchasing power. Because we are interested in how each exchange rate changes over time, we will look at the percent change in the exchange rate on a daily basis. To calculate the percent change over time, we will take the difference of logarithms of the sequential data; the resulting numbers are in units of percent change. For the purposes of business analysis, and as it relates to the financial analytics course, small changes in the log of a variable are directly interpretable as percent changes, to a very close approximation.
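As a quick sanity check of this approximation, we can compare the log difference against the exact simple percent change for the first two USD.EUR quotes shown in head(exrates) above (the values are hard-coded here for illustration):

```r
#Checking the log-return approximation on the first two USD.EUR
#quotes from head(exrates) above (hard-coded for illustration)
rate_yesterday <- 1.3459
rate_today     <- 1.3484
log_pct    <- (log(rate_today) - log(rate_yesterday)) * 100
simple_pct <- (rate_today / rate_yesterday - 1) * 100
round(log_pct, 4)                 #0.1856, matching the first row of exrates.r
abs(log_pct - simple_pct) < 0.01  #TRUE: the two measures agree very closely
```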

#Using the properties of the natural log to determine
#the percent change in exchange rates
exrates.r <- diff(log(as.matrix(exrates[,-1]))) * 100
head(exrates.r)   #first 6 days of percent change
##      USD.EUR     USD.GBP     USD.CNY     USD.JPY
## 2  0.1855770  0.41352605  0.03052233 -0.08821260
## 3  0.5915427  0.26629486 -0.08837968  0.44028690
## 4  0.1473405  0.39811737 -0.02894123  0.25228994
## 5  0.7919091 -0.70886373  0.12695761  1.37092779
## 6 -1.2124033 -0.04447127  0.09792040  0.03241316
## 7  0.3100091 -0.54159233 -0.05456676  0.82836254
tail(exrates.r)   #last 6 days of percent change
##          USD.EUR    USD.GBP     USD.CNY    USD.JPY
## 1248  0.00000000 -0.2306640 -0.28869056 -0.2890175
## 1249 -0.06539153  0.6258788  0.07029877  0.5322280
## 1250  0.38356435  0.1719691 -0.05467255 -0.6227176
## 1251  0.91621024  1.6332111 -0.54837584 -1.1930381
## 1252  0.78784876  0.4637771 -0.72690896 -0.4131289
## 1253 -0.52990891 -0.5976884  0.01582429 -0.2948224
str(exrates.r)    #Shows there are 4 columns with 1252 instances
##  num [1:1252, 1:4] 0.186 0.592 0.147 0.792 -1.212 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : chr [1:1252] "2" "3" "4" "5" ...
##   ..$ : chr [1:4] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY"
#creating a matrix for the volatility of the exchange markets
size <- na.omit(abs(exrates.r))
head(size)        #viewing the first 6 rows of the size matrix
##     USD.EUR    USD.GBP    USD.CNY    USD.JPY
## 2 0.1855770 0.41352605 0.03052233 0.08821260
## 3 0.5915427 0.26629486 0.08837968 0.44028690
## 4 0.1473405 0.39811737 0.02894123 0.25228994
## 5 0.7919091 0.70886373 0.12695761 1.37092779
## 6 1.2124033 0.04447127 0.09792040 0.03241316
## 7 0.3100091 0.54159233 0.05456676 0.82836254
#Renaming the columns to the exchange market names from the
#exrates.r matrix with ".size" appended at the end
colnames(size) <- paste(colnames(size), ".size", sep = "")
#viewing the first six rows of the renamed size matrix
head(size)
##   USD.EUR.size USD.GBP.size USD.CNY.size USD.JPY.size
## 2    0.1855770   0.41352605   0.03052233   0.08821260
## 3    0.5915427   0.26629486   0.08837968   0.44028690
## 4    0.1473405   0.39811737   0.02894123   0.25228994
## 5    0.7919091   0.70886373   0.12695761   1.37092779
## 6    1.2124033   0.04447127   0.09792040   0.03241316
## 7    0.3100091   0.54159233   0.05456676   0.82836254
#Copying exrates.r to get a matrix with the same dimensions
direction <- exrates.r
direction[exrates.r > 0] <- 1     #positive changes become 1
direction[exrates.r < 0] <- -1    #negative changes become -1
direction[exrates.r == 0] <- 0    #zero changes stay 0
#setting the column names to the exchange market with ".dir" appended
colnames(direction) <- paste(colnames(exrates.r), ".dir", sep = "")

#Converting into a time series object
#Vector of only dates
#removing the first index of the dates because of the diff function
dates <- exrates$DATE[-1]
#Creating a matrix that has the percent change, the absolute value
#of the percent change, and whether the foreign exchange
#appreciated, depreciated, or stayed the same
values <- cbind(exrates.r, size, direction)
#Creating the data frame with the dates, returns, size, and direction
exrates.df = data.frame(dates = dates, returns = exrates.r,
    size = size, direction = direction)
#Viewing structure of the data to ensure all is looking normal
str(exrates.df)
## 'data.frame':    1252 obs. of  13 variables:
##  $ dates                : Date, format: "2013-01-29" "2013-01-30" ...
##  $ returns.USD.EUR      : num  0.186 0.592 0.147 0.792 -1.212 ...
##  $ returns.USD.GBP      : num  0.4135 0.2663 0.3981 -0.7089 -0.0445 ...
##  $ returns.USD.CNY      : num  0.0305 -0.0884 -0.0289 0.127 0.0979 ...
##  $ returns.USD.JPY      : num  -0.0882 0.4403 0.2523 1.3709 0.0324 ...
##  $ size.USD.EUR.size    : num  0.186 0.592 0.147 0.792 1.212 ...
##  $ size.USD.GBP.size    : num  0.4135 0.2663 0.3981 0.7089 0.0445 ...
##  $ size.USD.CNY.size    : num  0.0305 0.0884 0.0289 0.127 0.0979 ...
##  $ size.USD.JPY.size    : num  0.0882 0.4403 0.2523 1.3709 0.0324 ...
##  $ direction.USD.EUR.dir: num  1 1 1 1 -1 1 -1 -1 -1 1 ...
##  $ direction.USD.GBP.dir: num  1 1 1 -1 -1 -1 1 1 1 -1 ...
##  $ direction.USD.CNY.dir: num  1 -1 -1 1 1 -1 1 1 0 0 ...
##  $ direction.USD.JPY.dir: num  -1 1 1 1 1 1 1 -1 -1 1 ...
#Converts the matrix into a time series object
exrates.xts  <- na.omit(as.xts(values, dates))
#Viewing the structure of the time series to make sure nothing is wrong
str(exrates.xts)
## An 'xts' object on 2013-01-29/2018-01-26 containing:
##   Data: num [1:1252, 1:12] 0.186 0.592 0.147 0.792 -1.212 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : NULL
##   ..$ : chr [1:12] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY" ...
##   Indexed by objects of class: [Date] TZ: UTC
##   xts Attributes:  
##  NULL
#converting from the xts object to a zooreg object
exrates.zr <- na.omit(as.zooreg(exrates.xts))
#Looking at the structure of the zooreg object
str(exrates.zr)
## 'zooreg' series from 2013-01-29 to 2018-01-26
##   Data: num [1:1252, 1:12] 0.186 0.592 0.147 0.792 -1.212 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : NULL
##   ..$ : chr [1:12] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY" ...
##   Index:  Date[1:1252], format: "2013-01-29" "2013-01-30" "2013-01-31" "2013-02-01" "2013-02-04" ...
##   Frequency: 1

The code above converts the daily foreign exchange rates into percent changes, creates a matrix of the absolute value of each percent change, and creates another matrix indicating whether each exchange rate appreciated, depreciated, or remained the same. Finally, it combines all of this data and converts it into xts and zooreg time series objects using the date vector.

Next, we will create an interactive visual plot of the percent daily change for the four exchange markets ranging from 2013 to 2018 using the ggplot2 and plotly libraries.

#Creates the title for the the plot
title.chg <- "Exchange Rates Percent Change"
#Creating the ggplot object for the percent change with a y-limit of -5% to 5%
#for each of the four foreign exchange market
p1 <- autoplot.zoo(exrates.zr[,1:4]) + ggtitle(title.chg) + ylim(-5,5)
#Displays an interactive plot for the ggplot object created above
ggplotly(p1)
#Sets a title for the size plot
title.size <- "Absolute Value of Percent Change"
#Creates a ggplot object for the size of the change with the y limit
#from 0 to 5
p2 <- autoplot.zoo(exrates.zr[,5:8]) + ggtitle(title.size) + ylim(0,5)
#Displays an interactive plot for the size object created above
ggplotly(p2)

The interactive plots above show the daily percent change in the foreign exchange markets relative to the U.S. dollar. They also show that the market for the Japanese yen consistently displays the most volatile behavior, while the market for the Chinese yuan appears to be the steadiest. The markets for the British pound and the euro have fairly similar volatility, with a magnitude somewhere between the yuan and the yen. This makes intuitive sense: until recently the UK was part of the European Union and the two markets were fairly entwined. There is some excess volatility in the GBP market around June 2016, when the United Kingdom voted to leave the European Union, inviting political speculation about the future of the UK.

The relatively flat line for the percent change in the USD-to-CNY rate shows that the value of the yuan is largely fixed against the dollar, meaning you could buy approximately the same amount of yuan with a U.S. dollar at any time in this span. This fixed price against the dollar is significant because it keeps the yuan cheap even as the Chinese economy continues to grow. It is worth noting, though, that the yuan exchange rate has shown an increase in volatility, particularly in its magnitudes since late 2015 and more recently in 2017. This may be a result of a key yuan index changing its calculation method.

Using this plot, we can see that the volatility in our earnings due to the foreign exchange market is primarily driven by our business in Japan. The volatility in the GBP and euro markets is not as pronounced as in the yen, but it could still cause significant issues for our cash flow, depending on the volume of revenue. If we were considering entering the Chinese market to stabilize some of the volatility in our cash flow, we would want to research the barriers to entry, as the Chinese economy is dominated by state-owned enterprises and the political risk may outweigh the financial gains. It would also be worth checking whether our prices would be competitive in the Chinese market, since the Chinese economy has been growing faster than that of the United States while holding the currency at approximately the same price, seemingly undervaluing the yuan.

Question 2: Foreign Exchange Market Relationships and Descriptive Statistics

Next, we will be interested in seeing how the foreign exchange markets interact, and whether events in the past have an effect on the current market. This will be viewed in terms of the autocorrelations in percent change and autocorrelations in the size of the change. Using the partial autocorrelation, we will be able to see how long of a memory the markets have and which days are significant in determining today’s market. First we will create an autocorrelation matrix for the percent change in the markets.

#Creating an autocorrelation matrix for each market
#The diagonal shows each market's memory of its own past
#The x-axis is the amount of lag
#If the x-axis is negative, the exchange market listed first
#in the panel title is the one being lagged
acf(coredata(exrates.xts[,1:4]))

The vertical lines along the x-axis show the autocorrelation coefficients, and lines that extend beyond the dashed band indicate significant autocorrelation. The matrix shows some correlation in same-day (zero-lag) percent changes between markets, but not enough memory within the markets to build accurate forecasts from this data alone.
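For reference, the dashed bands that acf() draws are approximately ±1.96/√n under a white-noise null, so with our 1,252 daily observations the cutoff is quite small (a back-of-the-envelope check, assuming the default 95% band):

```r
#Approximate 95% significance band drawn by acf() under white noise
n <- 1252                      #number of daily percent changes
band <- qnorm(0.975) / sqrt(n) #1.96 / sqrt(n)
round(band, 4)                 #0.0554: coefficients beyond this are significant
```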

The most significant same-day correlations are the EUR/GBP interaction and the EUR/JPY interaction. Same-day positive movement in either the GBP or the EUR accompanies positive movement in the other. With a correlation coefficient of roughly 0.5, about 25% of the movement in either the EUR or the GBP can be explained by the other. The interactions between the Chinese yuan and every other market appear less significant than all the other interactions. There also appears to be little interaction between changes in the GBP rate and changes in the JPY rate.

Next, we will create an autocorrelation matrix for the sizes of the percent changes. Significant interactions mean that large-magnitude returns breed more large-magnitude returns (independent of direction), and that small-magnitude returns breed more small-magnitude returns.

#Creating an autocorrelation matrix for the sizes of the
#percent changes in the four foreign exchange markets
#(columns 5-8 of the xts object)
acf(coredata(exrates.xts[,5:8]))

Using the autocorrelation matrix for the magnitude of percent change, we can see that there is some memory within each market: along the diagonal of the matrix, multiple vertical lines fall outside the confidence band. This memory is strongest in the short term (lags 1-5), where significant autocorrelations appear more frequently than at longer lags, but there is also some persistence, with a few coefficients outside the band around a lag of 20. The graphic suggests that a given magnitude of returns today tends to bring a similar magnitude of returns in the future. Whether the exchange rate will appreciate or depreciate is outside the scope of this graphic, since we are only looking at the size of the percent change; the magnitudes measured here are independent of direction.

Next we will use the partial autocorrelation between the foreign exchange markets to see if there is any correlation in the interactions between markets in the past, independent of what has happened between the lag date and the present.

#Creating a partial autocorrelation matrix for the percent changes
pacf(coredata(exrates.xts[,1:4]))

The partial autocorrelation matrix shows a significant amount of memory in the GBP and EUR markets with respect to the CNY exchange rate, particularly around a lag of 10. It also shows significant memory in the JPY market with respect to past events in the CNY market. While it is interesting that the partial autocorrelation of the CNY market is significant with all the other markets, it is also worth noting that the CNY market shows little memory when looking at past events in its own exchange. This could be typical for the other markets as well, but the CNY market does not appear to behave in the same manner as the others. This could be a result of the CNY exchange essentially mimicking the daily percent changes of the USD.

Next we will create a partial autocorrelation plot to show whether the size of past percent changes in the exchange rate has an effect on the size of today’s change, independent of what has happened between the lag and the present.

#Partial autocorrelation on the magnitude of percent change
pacf(coredata(exrates.xts[,5:8]))

The partial autocorrelations for the magnitude of percent change indicate that the EUR market maintains significant memory up to 15 days prior, with some correlation out to around 25 days. The GBP shows similar behavior up to 14 days prior. Surprisingly, there is little relationship between the EUR and GBP in terms of volatility: although the currencies belong to the same continent, and a stronger relationship might be assumed, the partial autocorrelations indicate they are not strongly correlated. The most important finding in these plots, however, is that the Chinese yuan significantly affects the volatility of all the other markets while showing no volatility relationship with the others on its own. The yuan market maintains memory at lags of 1, 2, 3, and 7 days.

Next we will create a function to view some descriptive statistics based on magnitude of percent change in the various exchange markets.

data_moments <- function(data) {
    library(moments)                            #Package for skewness and kurtosis
    library(matrixStats)                        #Package for statistics on matrix columns
    mean.r <- colMeans(data)                    #Calculates the mean for each column
    median.r <- colMedians(data)                #Calculates the median for each column
    sd.r <- colSds(data)                        #Standard deviation for each column
    IQR.r <- colIQRs(data)                      #Difference between Q1 and Q3
    skewness.r <- skewness(data)                #Skewness for each column
    kurtosis.r <- kurtosis(data)                #kurtosis for each column
    #Creates a data frame with the statistics for each column
    result <- data.frame(mean = mean.r,
        median = median.r, std_dev = sd.r, 
        IQR = IQR.r, skewness = skewness.r, 
        kurtosis = kurtosis.r)
    return(result)
}
#Using the data moments function on the size of the percent change
answer <- data_moments(exrates.xts[,5:8])
#knitting the table to display in a nice formatted table, rounded to 4 decimals
knitr::kable(answer, digits = 4)
|             |   mean| median| std_dev|    IQR| skewness| kurtosis|
|:------------|------:|------:|-------:|------:|--------:|--------:|
|USD.EUR.size | 0.4003| 0.2935|  0.3695| 0.4313|   1.7944|   8.0424|
|USD.GBP.size | 0.4008| 0.2995|  0.4266| 0.4173|   6.0881|  93.5604|
|USD.CNY.size | 0.1027| 0.0601|  0.1375| 0.1154|   3.9004|  31.0222|
|USD.JPY.size | 0.4533| 0.3250|  0.4455| 0.4684|   2.2201|  10.4898|

The data moments table above shows the calculated statistics for each of the four foreign exchange markets. In support of our initial analysis, the Japanese yen market is the most volatile, the mean and median volatility for the euro and the pound are relatively similar, and the Chinese yuan market is the least volatile. The standard deviation for the pound is slightly greater than that of the euro, most likely from the increased volatility around Brexit in 2016. That increased volatility also helps explain the very large kurtosis found in the GBP market.

All of the markets have high kurtosis relative to the normal distribution, meaning there are more extreme events in the tails than a normal distribution would produce. The skewness of the USD-to-GBP series is positive, but this does not necessarily mean the dollar appreciates in these extreme situations, since we are analyzing the absolute value of the change without any indication of direction. Further analysis of the distribution of the actual percent changes of the various exchange rates would give a better idea of which markets offer a better exchange rate. Furthermore, each market is heavily right-skewed, with the GBP and Chinese markets producing the most right-sided distributions; this is to be expected, because the size series is bounded at 0 on the left.
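Note that kurtosis() in the moments package reports raw (not excess) kurtosis, so the normal-distribution baseline is about 3. A base-R sketch of that baseline against a heavy-tailed alternative (simulated data, for illustration only):

```r
#Raw kurtosis: the fourth standardized moment; ~3 for a normal distribution
kurt <- function(x) mean((x - mean(x))^4) / mean((x - mean(x))^2)^2
set.seed(1)
round(kurt(rnorm(1e5)), 1)   #close to 3: the normal baseline
kurt(rt(1e5, df = 3)) > 3    #TRUE: Student-t with 3 df is heavy-tailed
```

Against that baseline of 3, values like 93.56 for the GBP size series indicate extremely fat tails.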

Next we will calculate the average of the actual returns for each of the foreign exchange markets. We expect the markets to be centered around zero. We would prefer the Chinese and Japanese rates to be slightly positive and the GBP and euro rates to be slightly negative. This is because the Chinese and Japanese rates are quoted as how many yuan or yen a dollar can purchase: a positive mean change indicates we can buy more of the currency with a dollar, and a negative one that we can buy less. Conversely, the euro and GBP rates are quoted as how many USD are needed to purchase one euro or pound, so a negative mean change means it takes fewer USD to buy one, increasing our purchasing power.

#Looking at the mean percent change in all the exchange rates
colMeans(exrates.xts[, 1:4])
##      USD.EUR      USD.GBP      USD.CNY      USD.JPY 
## -0.006404068 -0.008067620  0.001221294  0.014197724

The Japanese market displays the highest mean return of the four markets, and its mean was also the furthest from zero. Additionally, all of the markets were on our preferred side of zero, indicating that the USD has done well against each of the other currencies over this period.

Part 2

Introduction

We want to characterize the distribution of up and down movements visually. We would also like to repeat the analysis periodically for inclusion in management reports.

Question 1: Distribution of returns

To better understand the daily behavior of exchange rates and what our exposure to the euro looks like, we will create an estimated cumulative distribution function for the returns. Modeling the distribution this way lets us see how likely it is for the daily change in the euro exchange rate to fall below a specified quantile. In creating an investing plan with a 5% risk allowance, we can use this estimated CDF to see at what percentage a U.S. dollar depreciation against the euro exceeds our tolerance for risk. The following block of R code sets a tolerable risk percentage of 0.95 and finds the rate of dollar depreciation beyond which we are outside our risk tolerance.

#Setting a tolerable rate of 95%
exrates.tol.pct <- 0.95
#setting exrates.tol to be the value that has 95% of returns below that value
exrates.tol <- quantile(exrates.df$returns.USD.EUR, 
    exrates.tol.pct)
#combining variables to create a label that states
#what returns are at the 95%
exrates.tol.label <- paste("Tolerable Rate = ", 
    round(exrates.tol, 2), "%", sep = "")
#creating a ggplot object with the exrates data frame using the usd to euro exchange rate
#using the cumulative density function to create the plot in a blue color and drawing a 
#red vertical line at the 95% and adding text to the plot that the tolerable rate is 
#at 95%
p <- ggplot(exrates.df, aes(returns.USD.EUR,
    fill = direction.USD.EUR.dir)) + 
    stat_ecdf(colour = "blue", size = 0.75) + 
    geom_vline(xintercept = exrates.tol, 
        colour = "red", size = 1.5) + 
    annotate("text", x = exrates.tol + 
        1, y = 0.75, label = exrates.tol.label, 
        colour = "darkred")
#showing the plot
p

The estimated cumulative distribution function shows that on a given day there is a 95% chance that the percent change in the dollar-euro exchange rate will be below 0.88%. Equivalently, there is only a 5% chance that the dollar will depreciate against the euro by more than 0.88%. If the dollar has depreciated by more than 0.88% on a recent date, we should conduct further analysis of the exchange rate before buying euros.

The distribution of percent changes in the dollar-euro exchange rate also appears fairly symmetrical around zero: approximately 50% of the time the dollar appreciates and 50% of the time it depreciates. A further analysis to better understand our exposure to the euro would compare the shape of the positive percent changes against the shape of the negative percent changes.
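One way to run that comparison is to split the returns by sign and compare matched quantiles of the two tails. A minimal sketch on simulated returns (in the report this would use exrates.df$returns.USD.EUR, so the numbers here are illustrative only):

```r
#Sketch: compare the upside vs. downside tails of a return series
#(simulated here; substitute exrates.df$returns.USD.EUR in practice)
set.seed(42)
r <- rnorm(1252, mean = 0, sd = 0.5)
up   <- r[r > 0]
down <- -r[r < 0]   #flip sign so both tails are measured as positive sizes
#For a symmetric distribution, matched quantiles should be close to equal:
round(quantile(up, c(0.50, 0.95)) - quantile(down, c(0.50, 0.95)), 2)
```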

Question 2:

While statistics and correlations between the markets over a long period can be helpful, it is important to see how these relationships and statistics have changed over time. It is also important to make this analysis repeatable, saving the time and effort of rewriting code. First we will look at the cross-correlation function for the percent changes of the euro and the GBP.

#creating a timeseries for the euro exchange rate
one <- ts(exrates.df$returns.USD.EUR)
#creating a timeseries for the british pound exchange rate
two <- ts(exrates.df$returns.USD.GBP)
#Estimating the cross-correlation with both positive and
#negative lags, a maximum lag of 20, and a red confidence interval
ccf(one, two, main = "GBP vs. EUR", lag.max = 20, 
    xlab = "", ylab = "", ci.col = "red")

The cross-correlation plot for the GBP and the euro shows some small correlations across time in the raw returns. Moreover, when using return sizes we see clustering of the volatility correlation. This suggests that similar risks tend to occur in both markets at about the same time.

Next we will create a function that produces a cross-correlation plot for any two vectors of equal length. This will save time when evaluating the cross-correlations between the euro and the GBP on future data, and when comparing correlations for new currencies.

#creating a function that takes two columns of a data frame,
#a title, the lag, and the color of the confidence interval on the plot
run_ccf <- function(one, two, main = "one vs. two", 
    lag = 20, color = "red") {
    #If the lengths of the two vectors are not the same length, stop
    stopifnot(length(one) == length(two))
    #convert the vectors into timeseries
    one <- ts(one)
    two <- ts(two)
    #creating the cross correlation plot with the given
    #title, maximum lag, and confidence interval color
    ccf(one, two, main = main, lag.max = lag, 
        xlab = "", ylab = "", ci.col = color)
    # end run_ccf
}
#setting the first vector to the euro exchange
one <- exrates.df$returns.USD.EUR
#setting the second vector to the GBP exchange
two <- exrates.df$returns.USD.GBP
#creating the title for the figure
title <- "EUR vs. GBP"
#Running the function with the arguments created
run_ccf(one, two, main = title, lag = 20, 
    color = "red")

As the previous graphic showed, there seem to be some small correlations across time in the raw returns, along with volatility correlation clustering when using return sizes. This also demonstrates that the exact same plots can be produced by a reusable function rather than a one-off script.

Next, we will use the run_ccf function created above to produce the ccf plot for the volatility of returns in the GBP and euro exchanges.

#Creating a vector of the euro size (volatility) series
one <- exrates.zr[, 5]
#Creating a vector of the GBP size (volatility) series
two <- exrates.zr[, 6]
#Creating a title for our plot
title <- "EUR vs. GBP: volatility"
#Running the cross correlation function we created
run_ccf(one, two, main = title, lag = 20, 
    color = "red")


The cross-correlation function between the GBP and the euro, in terms of the magnitude of percent change, shows that there is even more volatility clustering in terms of the size of the change over time, and with even more significance. This would signify that there is persistence, and even some spillover from the two markets.

Next, we will look at how the correlations between markets have changed over time by creating functions that calculate the correlations in raw returns and the volatilities of the returns. These functions will be paired with rollapply, which lets us compute the statistics over a rolling window that we will set to 90 days.

#Starting a function that takes in a data frame 
corr_rolling <- function(x) {
    #getting the number of columns in the data frame
    dim <- ncol(x)
    #taking the correlation of the data frame and index the lower triangle
    #of the square matrix, excluding the diagonal
    corr_r <- cor(x)[lower.tri(diag(dim), 
        diag = FALSE)]
    #returning the lower triangle of the correlation matrix
    return(corr_r)
}
#Create a function for calculating the rolling volatility
vol_rolling <- function(x) {
    #loading the matrix statistics library
    library(matrixStats)
    #calculating the volatility (standard deviation) of each column
    vol_r <- colSds(x)
    #returning the standard deviations of the columns
    return(vol_r)
}
#Creating a matrix of the returns
ALL.r <- exrates.xts[, 1:4]
#Creating a window of 90 days
window <- 90
#rollapply applies the corr_rolling function to the ALL.r data on a
#rolling window of 90 days, across all columns jointly (by.column = FALSE)
corr_r <- rollapply(ALL.r, width = window, 
    corr_rolling, align = "right", by.column = FALSE)
#Assigning the column names to the correlations between the columns
colnames(corr_r) <- c("EUR.GBP", "EUR.CNY", 
    "EUR.JPY", "GBP.CNY", "GBP.JPY", 
    "CNY.JPY")
#rollapply applies the vol_rolling function to the ALL.r data over a
#rolling window of 90 days; by.column = FALSE passes all columns at once
vol_r <- rollapply(ALL.r, width = window, 
    vol_rolling, align = "right", by.column = FALSE)
#assigning the column names to the vol_r data frame
colnames(vol_r) <- c("EUR.vol", "GBP.vol", 
    "CNY.vol", "JPY.vol")
#extracting the year from the date index of the rolling correlations
year <- format(index(corr_r), "%Y")
#combining the returns, rolling correlations, rolling volatilities,
#and the year of each date into one data frame
r_corr_vol <- merge(ALL.r, corr_r, vol_r, 
    year)
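As a sanity check on the pair ordering behind the corr_r column names, here is a toy sketch (simulated data, with hypothetical column names A through D standing in for the four rates) showing the order in which lower.tri() extracts pairwise correlations:

```r
#Toy matrix with four named columns standing in for the four exchange rates
set.seed(42)
m <- matrix(rnorm(400), ncol = 4,
            dimnames = list(NULL, c("A", "B", "C", "D")))
cm <- cor(m)
#which() walks the lower triangle column by column, so the pairs come out
#as A.B, A.C, A.D, B.C, B.D, C.D
idx <- which(lower.tri(cm), arr.ind = TRUE)
paste(colnames(cm)[idx[, "col"]], colnames(cm)[idx[, "row"]], sep = ".")
```

With the rates ordered EUR, GBP, CNY, JPY, this same ordering is why the names run EUR.GBP, EUR.CNY, EUR.JPY, GBP.CNY, GBP.JPY, CNY.JPY.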

The rolling correlations and rolling volatilities of raw returns give us more context about how recent and historical events have affected the relationship between two markets. This can be helpful when planning operations or other business ventures for the quarter in the various markets we are operating in.
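The rollapply mechanics can be illustrated on a small simulated pair of series (toy data; the window width of 5 here is arbitrary):

```r
library(zoo)
set.seed(1)
#two simulated return series, 10 observations each
z <- zoo(cbind(x = rnorm(10), y = rnorm(10)))
#a 5-observation rolling correlation; by.column = FALSE hands the whole
#window (both columns) to the function at once, as with corr_rolling above
rc <- rollapply(z, width = 5, function(w) cor(w)[2, 1],
                align = "right", by.column = FALSE)
length(rc)  #6 windows: 10 - 5 + 1
```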

Question 3: Correlations and Volatilities

One final question we will want to answer is: are the correlations and volatilities related? This is important because, if volatility increased in one market, we would want to know whether to pull our projects out of another market. An increase in correlation could also mean we would want to increase operations; it depends on whether the markets are positively or negatively correlated. We will explore this with a quantile regression and a linear regression of the log correlation between the Chinese and Japanese exchanges, with the volatility in the Japanese market as the independent variable. Exploring the correlation between these two exchanges as a function of the yen's volatility makes sense because we found some memory in the yuan based on past events in the yen exchange.

#importing the quantile regression library
library(quantreg)
## Loading required package: SparseM
## 
## Attaching package: 'SparseM'
## The following object is masked from 'package:base':
## 
##     backsolve
#Creating a vector of quantiles from 0.05 to 0.95; a separate
#regression will be fit at each of these quantiles
taus <- seq(0.05, 0.95, 0.05)  # Roger Koenker UIC Bob Hogg and Allen Craig
#running a quantile regression to see how the rolling volatility
#of the Japanese yen affects the correlation in exchange rates between
#the yuan and the yen, at the quantiles specified in taus, using the
#r_corr_vol data frame; the logs of the correlation and the volatility
#are taken to linearize the relationship
fit.rq.CNY.JPY <- rq(log(CNY.JPY) ~ log(JPY.vol), 
    tau = taus, data = r_corr_vol)
## Warning in log(CNY.JPY): NaNs produced
#Fitting an ordinary least squares regression on the yuan and the yen
fit.lm.CNY.JPY <- lm(log(CNY.JPY) ~ log(JPY.vol), 
    data = r_corr_vol)
## Warning in log(CNY.JPY): NaNs produced
#assigning the summary of the quantile regression to a variable
CNY.JPY.summary <- summary(fit.rq.CNY.JPY, 
    se = "boot")
#printing the summary
CNY.JPY.summary
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.05
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -5.07014   0.45854  -11.05704   0.00000
## log(JPY.vol)  -0.96318   0.54793   -1.75784   0.07910
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.1
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.80541   0.30005  -12.68274   0.00000
## log(JPY.vol)  -0.05799   0.37920   -0.15293   0.87849
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.15
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.28519   0.15596  -21.06453   0.00000
## log(JPY.vol)   0.10332   0.27973    0.36936   0.71194
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.2
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.03994   0.08162  -37.24379   0.00000
## log(JPY.vol)  -0.10061   0.20468   -0.49156   0.62314
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.25
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.89330   0.07855  -36.83587   0.00000
## log(JPY.vol)  -0.35602   0.13039   -2.73046   0.00644
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.3
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.68656   0.12281  -21.87529   0.00000
## log(JPY.vol)  -0.28125   0.15161   -1.85505   0.06391
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.35
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.45886   0.08817  -27.88924   0.00000
## log(JPY.vol)  -0.10791   0.12079   -0.89338   0.37188
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.4
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.34030   0.11171  -20.95037   0.00000
## log(JPY.vol)  -0.06848   0.12972   -0.52789   0.59770
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.45
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.08348   0.07583  -27.47654   0.00000
## log(JPY.vol)   0.08756   0.08610    1.01685   0.30949
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.5
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.91646   0.06675  -28.71262   0.00000
## log(JPY.vol)   0.18794   0.07020    2.67711   0.00756
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.55
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.77384   0.06763  -26.23026   0.00000
## log(JPY.vol)   0.26318   0.08996    2.92551   0.00352
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.6
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -1.63052  0.24564   -6.63770  0.00000
## log(JPY.vol)  0.25338  0.23768    1.06604  0.28668
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.65
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.98780  0.14145   -6.98360  0.00000
## log(JPY.vol)  0.67504  0.18330    3.68277  0.00024
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.7
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.62254  0.20514   -3.03473  0.00247
## log(JPY.vol)  0.78950  0.28758    2.74532  0.00616
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.75
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.38695  0.08023   -4.82296  0.00000
## log(JPY.vol)  0.96979  0.15175    6.39057  0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.8
## 
## Coefficients:
##              Value    Std. Error t value  Pr(>|t|)
## (Intercept)  -0.39138  0.04255   -9.19724  0.00000
## log(JPY.vol)  0.83000  0.12916    6.42604  0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.85
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -0.37793   0.03442  -10.97897   0.00000
## log(JPY.vol)   0.59616   0.07248    8.22527   0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.9
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -0.35178   0.01828  -19.24570   0.00000
## log(JPY.vol)   0.55875   0.05244   10.65520   0.00000
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.95
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -0.35746   0.00609  -58.66692   0.00000
## log(JPY.vol)   0.44931   0.01681   26.72271   0.00000
#Plotting the intercept and the slope determined by the quantile
# regression for each quantile specified in taus
plot(CNY.JPY.summary)
## Warning in log(CNY.JPY): NaNs produced

Performing a quantile regression allows us to infer a relationship between the CNY/JPY correlation and the volatility of the returns for JPY. In the chunk above we perform the quantile regression with the ‘rq’ command, while ‘lm’ fits a simple regression of CNY/JPY on JPY.vol, shown in red. The vector ‘taus’ specifies the quantiles at which the regressions are run: every quantile from 0.05 to 0.95, in 0.05 increments, with log(CNY.JPY) as the dependent variable and log(JPY.vol) as the independent variable. The first plot displays the intercept (y-axis) for each quantile (x-axis); the bottom plot shows the associated slope for the same quantiles, matching the summary tables. The quantile coefficients tell us how the CNY/JPY correlation changes with JPY volatility at each quantile. The 0.1, 0.4, 0.5, and 0.6 quantile coefficients fall within the confidence intervals, but all others fall well outside of them. This gives little confidence that a simple linear regression model could predict and support inter-market effects, especially when the correlations are not behaving near their mean value. The data also shows a steady increase in the slope and intercept as the quantiles increase, meaning the correlation of CNY and JPY is strongly related to the volatility in JPY.
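To build intuition for why the quantile slopes fan out in the summary above, here is a minimal sketch on simulated heteroskedastic data (all values here are illustrative, not drawn from our exchange-rate series):

```r
library(quantreg)
set.seed(7)
x <- runif(500)
#noise whose spread grows with x, so the upper and lower quantile slopes
#should diverge even though lm() tracks only the conditional mean
y <- 1 + 2 * x + rnorm(500, sd = 0.1 + x)
fit.q <- rq(y ~ x, tau = c(0.05, 0.5, 0.95))
fit.l <- lm(y ~ x)
#the slope at tau = 0.95 comes out well above the slope at tau = 0.05
coef(fit.q)
coef(fit.l)
```

When the slopes differ this much across quantiles, a single OLS line is an incomplete summary, which is the same pattern we see in the CNY.JPY results above.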

Next, we will use the magick package to create an animation that shows the risk relationships year by year. This is done by creating a list of data frames that contain the volatility in the Japanese market and the correlation between the yen and the yuan, split by year.

#library for quantile regression
library(quantreg)
#library for animations
library(magick)
## Linking to ImageMagick 6.9.9.39
## Enabled features: cairo, fontconfig, freetype, lcms, pango, rsvg, webp
## Disabled features: fftw, ghostscript, x11
#Opening a magick graphics device with a resolution of 96 dpi
img <- image_graph(res = 96)
#Creating a list that has the r_corr_vol data frame
#split up by each year
datalist <- split(r_corr_vol, r_corr_vol$year)
#using list apply to apply a custom function for each year
out <- lapply(datalist, function(data) {
  #creating a plot assigned to the p variable with the yen volatility as
  #the x-axis and the yuan and yen correlation y-axis
    p <- ggplot(data, aes(JPY.vol, CNY.JPY)) +
        #making it a scatterplot and titling it the year
        geom_point() + ggtitle(data$year) + 
        #adding solid blue quantile regression lines at the 0.05 and
        #0.95 quantiles, bracketing the bulk of the data
        geom_quantile(quantiles = c(0.05, 
            0.95)) + geom_quantile(quantiles = 0.5, 
        #drawing the median (0.5 quantile) regression line with a
        #blue long-dashed line
        linetype = "longdash") + 
        #using red contours to show point densities in the scatter plot
        geom_density_2d(colour = "red")
    #print the plot
    print(p)
})
## Warning: Removed 89 rows containing non-finite values (stat_quantile).
## Smoothing formula not specified. Using: y ~ x
## Warning: Removed 89 rows containing non-finite values (stat_quantile).
## Smoothing formula not specified. Using: y ~ x
## Warning: Removed 89 rows containing non-finite values (stat_density2d).
## Warning: Removed 89 rows containing missing values (geom_point).
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
## Smoothing formula not specified. Using: y ~ x
#Closing any open graphics devices so that the image_graph
#capture is finalized before animating
while (!is.null(dev.list())) dev.off()
# img <-
# image_background(image_trim(img),
# 'white')
#Create an animation on the image using img created earlier and
#go at a rate of 0.5 frames per second, or change frames 
#every 2 seconds
animation <- image_animate(img, fps = 0.5)
#Show the animation
animation